collision detection



A 1000× Faster LLM-enhanced Algorithm For Path Planning in Large-scale Grid Maps

Zeng, Junlin, Zhang, Xin, Zhao, Xiang, Pan, Yan

arXiv.org Artificial Intelligence

Path planning in grid maps, arising in a wide range of applications, has garnered significant attention. Existing methods such as A*, Dijkstra, and their variants work well for small-scale maps but fail on large-scale ones due to high search time and memory consumption. Recently, Large Language Models (LLMs) have shown remarkable performance in path planning but still suffer from spatial illusions and poor planning quality. Among these works, LLM-A* (Meng et al., 2024) uses an LLM to generate a series of waypoints and then runs A* to plan the path between neighboring waypoints; concatenating these segments yields the complete path. However, LLM-A* still incurs high computational time on large-scale maps. To fill this gap, we conduct a deep investigation into LLM-A* and identify the bottlenecks that limit its performance. Accordingly, we design an innovative LLM-enhanced algorithm, iLLM-A*, built on three carefully designed mechanisms: an optimized A*, an incremental learning method that guides the LLM to generate high-quality waypoints, and a procedure that selects appropriate waypoints for A* path planning. A comprehensive evaluation on various grid maps shows that, compared with LLM-A*, iLLM-A* 1) achieves more than 1000× speedup on average, and up to 2349.5× speedup in the extreme case, 2) saves up to 58.6% of the memory cost, and 3) yields both noticeably shorter paths and lower path-length standard deviation.
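The waypoint-segmented planning idea the abstract describes can be sketched as follows: run A* between consecutive LLM-proposed waypoints and stitch the segments. This is a minimal illustration of the general scheme, not the paper's implementation; `astar` and `plan_through_waypoints` are hypothetical names, and the grid is a toy 4-connected map.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came, gbest = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:                       # reconstruct by walking parents
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < gbest.get(nxt, float("inf")):
                    gbest[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # unreachable

def plan_through_waypoints(grid, waypoints):
    """Run A* between consecutive waypoints and stitch the segments together."""
    full = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        seg = astar(grid, a, b)
        if seg is None:
            return None
        full.extend(seg[1:])  # drop the duplicated shared endpoint
    return full
```

Because each A* call searches only the region between two nearby waypoints, the open list stays small, which is the source of the speedup over a single global search.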


Robust Differentiable Collision Detection for General Objects

Chen, Jiayi, Zhao, Wei, Ruan, Liangwang, Chen, Baoquan, Wang, He

arXiv.org Artificial Intelligence

Collision detection is a core component of robotics applications such as simulation, control, and planning. Traditional algorithms like GJK+EPA compute witness points (i.e., the closest or deepest-penetration pairs between two objects) but are inherently non-differentiable, preventing gradient flow and limiting gradient-based optimization in contact-rich tasks such as grasping and manipulation. Recent work introduced efficient first-order randomized smoothing to make witness points differentiable; however, their direction-based formulation is restricted to convex objects and lacks robustness for complex geometries. In this work, we propose a robust and efficient differentiable collision detection framework that supports both convex and concave objects across diverse scales and configurations. Our method introduces distance-based first-order randomized smoothing, adaptive sampling, and equivalent gradient transport for robust and informative gradient computation. Experiments on complex meshes from DexGraspNet and Objaverse show significant improvements over existing baselines. Finally, we demonstrate a direct application of our method for dexterous grasp synthesis to refine the grasp quality. The code is available at https://github.com/JYChen18/DiffCollision.
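The core trick behind randomized smoothing of a distance function can be illustrated with a toy example: perturb the configuration with Gaussian noise and estimate the gradient of the smoothed distance from samples. This is only a zeroth-order Monte Carlo sketch of the general idea (the paper's distance-based first-order formulation is more sophisticated); `sphere_pair_distance` is a hypothetical stand-in for a real collision distance.

```python
import random, math

def smoothed_grad(f, x, sigma=0.05, n=20000, seed=0):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed f at x:
    grad E[f(x + sigma*eps)] = E[(f(x + sigma*eps) - f(x)) * eps] / sigma."""
    rng = random.Random(seed)
    d = len(x)
    g = [0.0] * d
    fx = f(x)  # baseline subtracted for variance reduction
    for _ in range(n):
        eps = [rng.gauss(0, 1) for _ in range(d)]
        fe = f([xi + sigma * e for xi, e in zip(x, eps)])
        for i in range(d):
            g[i] += (fe - fx) * eps[i]
    return [gi / (sigma * n) for gi in g]

def sphere_pair_distance(x):
    """Separation between two unit spheres with centers x[:3] and x[3:]
    (negative when penetrating) -- a simple stand-in for a collision distance."""
    dx = [x[i] - x[i + 3] for i in range(3)]
    return math.sqrt(sum(v * v for v in dx)) - 2.0
```

Even where the true distance is non-smooth (e.g., at witness-point switches on a mesh), the smoothed estimate remains well defined, which is what lets gradients flow through contact in downstream optimization.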


Contact Sensing via Joint Torque Sensors and a Force/Torque Sensor for Legged Robots

Grinberg, Jared, Ding, Yanran

arXiv.org Artificial Intelligence

This paper presents a method for detecting and localizing contact along robot legs using distributed joint torque sensors and a single hip-mounted force-torque (FT) sensor within a generalized momentum-based observer framework. We designed a low-cost strain-gauge-based joint torque sensor that can be installed on every joint to provide direct torque measurements, eliminating the need for complex friction models and providing more accurate torque readings than estimation based on motor current. Simulation studies on a floating-base 2-DoF robot leg verified that the proposed framework accurately recovers contact force and location along the thigh and shin links. Through a calibration procedure, our torque sensor achieved an average 96.4% accuracy relative to ground-truth measurements. Building upon the torque sensor, we performed hardware experiments on a 2-DoF manipulator, which showed sub-centimeter contact localization accuracy and force errors below 0.2 N.
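A generalized momentum observer of the kind the abstract relies on can be sketched for a single joint: the residual r tracks the external torque with first-order dynamics set by the gain K. This is a textbook simplification (no gravity or Coriolis terms), not the paper's multi-link implementation; the function name and signature are illustrative.

```python
def momentum_observer(I, tau_motor, qdot, dt, K=50.0):
    """First-order generalized-momentum observer for a single joint.
    The residual r converges to the external torque with bandwidth K (rad/s):
    r = K * (p - integral(tau_motor + r) dt), where p = I * qdot."""
    r, integral, rs = 0.0, 0.0, []
    for tau, qd in zip(tau_motor, qdot):
        integral += (tau + r) * dt
        r = K * (I * qd - integral)
        rs.append(r)
    return rs
```

The appeal of this scheme, as the paper notes, is that it needs only joint torques and velocities: with direct torque sensing, no friction or motor-current model enters the residual.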


Scalable Multi-Agent Path Finding using Collision-Aware Dynamic Alert Mask and a Hybrid Execution Strategy

Muppasani, Bharath, Dey, Ritirupa, Srivastava, Biplav, Narayanan, Vignesh

arXiv.org Artificial Intelligence

Multi-agent pathfinding (MAPF) remains a critical problem in robotics and autonomous systems, where agents must navigate shared spaces efficiently while avoiding conflicts. Traditional centralized algorithms with global information, such as Conflict-Based Search (CBS), provide high-quality solutions but become computationally expensive in large-scale scenarios due to the combinatorial explosion of conflicts that need resolution. Conversely, distributed approaches with only local information, particularly learning-based methods, offer better scalability by operating with relaxed information availability, yet often at the cost of solution quality. To address these limitations, we propose a hybrid framework that combines decentralized path planning with a lightweight centralized coordinator. Our framework leverages reinforcement learning (RL) for decentralized planning, enabling agents to adapt their plans based on minimal, targeted alerts, such as static conflict-cell flags or brief conflict tracks, which the central coordinator shares dynamically for effective conflict resolution. We empirically study the effect of the information available to an agent on its planning performance. Our approach reduces inter-agent information sharing compared to fully centralized and distributed methods, while still consistently finding feasible, collision-free solutions, even in large-scale scenarios with higher agent counts.
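The alert-based coordination loop can be sketched in miniature: the coordinator scans the agents' independent plans for vertex conflicts and broadcasts only the conflicting (cell, time) pairs; lower-priority agents then insert a wait when their next step hits an alerted cell. This is a deliberately simplified stand-in for the paper's RL-based replanning; both function names and the priority rule are illustrative.

```python
def detect_conflicts(paths):
    """Coordinator side: return {(cell, t)} where two agents occupy
    the same cell at the same timestep (a vertex conflict)."""
    seen, conflicts = {}, set()
    for i, path in enumerate(paths):
        for t, cell in enumerate(path):
            if (cell, t) in seen and seen[(cell, t)] != i:
                conflicts.add((cell, t))
            seen[(cell, t)] = i
    return conflicts

def apply_alerts(paths, conflicts):
    """Agent side: when an agent's next step lands on an alerted (cell, t),
    it waits in place one step. Only the alert set is shared, not full plans."""
    new_paths = []
    for i, path in enumerate(paths):
        out = [path[0]]
        for cell in path[1:]:
            t = len(out)
            if (cell, t) in conflicts and i > 0:  # lower-priority agents yield
                out.append(out[-1])               # wait one step
            out.append(cell)
        new_paths.append(out)
    return new_paths
```

The point of the sketch is the information pattern: agents never see each other's full trajectories, only a small set of flagged cells, which is what keeps the coordinator lightweight.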



A Convex Formulation of Compliant Contact between Filaments and Rigid Bodies

Li, Wei-Chen, Chou, Glen

arXiv.org Artificial Intelligence

We present a computational framework for simulating filaments interacting with rigid bodies through contact. Filaments are challenging to simulate due to their codimensionality, i.e., they are one-dimensional structures embedded in three-dimensional space. Existing methods often assume that filaments remain permanently attached to rigid bodies. Our framework unifies discrete elastic rod (DER) modeling, a pressure field patch contact model, and a convex contact formulation to accurately simulate frictional interactions between slender filaments and rigid bodies, capabilities not previously achievable. Owing to the convex formulation of contact, each time step can be solved to global optimality, guaranteeing complementarity between contact velocity and impulse. Finally, we demonstrate its applicability in both soft robotics, such as a stochastic filament-based gripper, and deformable object manipulation, such as shoelace tying, providing a versatile simulator for systems involving complex filament-filament and filament-rigid body interactions.
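The complementarity guarantee the abstract mentions can be shown in its simplest form: a frictionless point mass against a rigid floor, where the velocity-level contact solve is a one-variable convex QP with a closed-form clamped solution. This is a didactic reduction, not the paper's filament formulation; the function name is illustrative.

```python
def contact_step(m, v_minus):
    """One velocity-level contact solve for a point mass against a rigid floor.
    gamma = max(0, -m * v_minus) is the unique optimum of the convex QP
        min_{gamma >= 0}  gamma^2 / (2 m) + gamma * v_minus
    and the result satisfies complementarity: 0 <= gamma  and  gamma * v_plus = 0."""
    gamma = max(0.0, -m * v_minus)   # contact impulse (never pulls)
    v_plus = v_minus + gamma / m     # post-impact normal velocity
    return gamma, v_plus
```

Because the problem is convex, the clamped solution is globally optimal; in the paper's setting the same property holds for the full multi-contact QP at every time step, which is what makes the complementarity guarantee possible.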


How Fly Neural Perception Mechanisms Enhance Visuomotor Control of Micro Robots

Liu, Renyuan, Zhou, Haoting, Fang, Chuankai, Fu, Qinbing

arXiv.org Artificial Intelligence

Anyone who has tried to swat a fly has likely been frustrated by its remarkable agility. This ability stems from its visual neural perception system, particularly the collision-selective neurons within its small brain. For autonomous robots operating in complex and unfamiliar environments, achieving similar agility is highly desirable but often constrained by the trade-off between computational cost and performance. In this context, insect-inspired intelligence offers a parsimonious route to low-power, computationally efficient frameworks. In this paper, we propose an attention-driven visuomotor control strategy inspired by a specific class of fly visual projection neurons, the lobula plate/lobula column type-2 (LPLC2), and their associated escape behaviors. To our knowledge, this represents the first embodiment of an LPLC2 neural model in the embedded vision of a physical mobile robot, enabling collision perception and reactive evasion. The model was simplified and optimized to 70 KB of memory to suit the computational constraints of a vision-based micro robot, the Colias, while preserving key neural perception mechanisms. We further incorporated multi-attention mechanisms to emulate the distributed nature of LPLC2 responses, allowing the robot to detect and react to approaching targets both rapidly and selectively. We systematically evaluated the proposed method against a state-of-the-art locust-inspired collision detection model. Results showed that the fly-inspired visuomotor model achieved comparable robustness, with a 96.1% success rate in collision detection, while producing more adaptive and elegant evasive maneuvers. Beyond demonstrating an effective collision-avoidance strategy, this work highlights the potential of fly-inspired neural models for advancing research into collective behaviors in insect intelligence.
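The basic looming computation that collision-selective neurons like LPLC2 perform can be sketched with a threshold on the angular expansion rate of an approaching object. This is a bare-bones stand-in for the paper's neural model (which operates on real images with multi-attention); the function names and the threshold value are illustrative.

```python
import math

def angular_size(radius, distance):
    """Apparent angular size (radians) of a sphere at the given distance."""
    return 2.0 * math.atan(radius / max(distance, 1e-9))

def looming_escape(radius, distances, dt, rate_thresh=2.0):
    """Return the first time step at which the angular expansion rate
    d(theta)/dt exceeds the threshold -- a minimal stand-in for a
    collision-selective (looming) neuron's escape trigger."""
    prev = angular_size(radius, distances[0])
    for k, d in enumerate(distances[1:], start=1):
        theta = angular_size(radius, d)
        if (theta - prev) / dt > rate_thresh:
            return k
        prev = theta
    return None
```

Because a directly approaching object expands faster and faster on the retina while a passing one does not, even this crude rate test is selective for collision courses, which is the property the biological circuit exploits.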


NeuralSVCD for Efficient Swept Volume Collision Detection

Son, Dongwon, Jung, Hojin, Kim, Beomjoon

arXiv.org Artificial Intelligence

Robot manipulation in unstructured environments requires efficient and reliable Swept Volume Collision Detection (SVCD) for safe motion planning. Traditional discrete methods check for collisions only at sampled configurations and can miss collisions between those samples, whereas SVCD continuously checks for collisions along the entire trajectory. Existing SVCD methods typically face a trade-off between efficiency and accuracy, limiting practical use. In this paper, we introduce NeuralSVCD, a novel neural encoder-decoder architecture tailored to overcome this trade-off. Our approach leverages shape locality and temporal locality through distributed geometric representations and temporal optimization. This enhances computational efficiency without sacrificing accuracy. Comprehensive experiments show that NeuralSVCD consistently outperforms existing state-of-the-art SVCD methods in terms of both collision detection accuracy and computational efficiency, demonstrating its robust applicability across diverse robotic manipulation scenarios. Code and videos are available at https://neuralsvcd.github.io/.
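The failure mode of discrete checking, and what a swept check fixes, can be shown with the simplest possible case: a point robot moving in a straight line past a spherical obstacle. Checking only the endpoints misses the collision; checking the whole segment (the point's swept volume) catches it. This toy is for intuition only and has nothing of the paper's neural architecture; the function names are illustrative.

```python
import math

def point_sphere_collides(p, center, r):
    """Discrete check: is the point inside the sphere at this instant?"""
    return math.dist(p, center) <= r

def swept_point_sphere_collides(p0, p1, center, r):
    """Continuous check: does the segment p0 -> p1 pass within r of center?
    Equivalent to testing the point's swept volume against the sphere."""
    d = [b - a for a, b in zip(p0, p1)]
    f = [a - c for a, c in zip(p0, center)]
    dd = sum(x * x for x in d)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if dd == 0 else max(0.0, min(1.0, -sum(x * y for x, y in zip(f, d)) / dd))
    closest = [a + t * x for a, x in zip(p0, d)]
    return math.dist(closest, center) <= r
```

For articulated robots and meshes the swept volume is far harder to characterize than this segment, which is the gap NeuralSVCD's learned representation is designed to close.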


Tactile Gesture Recognition with Built-in Joint Sensors for Industrial Robots

Song, Deqing, Yang, Weimin, Rezayati, Maryam, van de Venn, Hans Wernher

arXiv.org Artificial Intelligence

While gesture recognition using vision or robot skins is an active research area in Human-Robot Collaboration (HRC), this paper explores deep learning methods that rely solely on a robot's built-in joint sensors, eliminating the need for external sensors. We evaluated various convolutional neural network (CNN) architectures and collected two datasets to study the impact of data representation and model architecture on recognition accuracy. Our results show that spectrogram-based representations significantly improve accuracy, while model architecture plays a smaller role. We also tested generalization to new robot poses, where spectrogram-based models performed better. Implemented on a Franka Emika Research robot, two of our methods, STFT2DCNN and STT3DCNN, achieved over 95% accuracy in contact detection and gesture classification. These findings demonstrate the feasibility of external-sensor-free tactile recognition and promote further research toward cost-effective, scalable solutions for HRC.
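The spectrogram representation that the paper found decisive can be sketched from first principles: slide a window over the joint-signal time series and take the magnitude of a DFT in each window. This is a naive stand-in for the STFT front end (a real pipeline would use an FFT and a window function, e.g. via numpy or scipy); the function name and parameters are illustrative.

```python
import math

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram from sliding windows and a naive DFT --
    the kind of time-frequency representation fed to a CNN.
    Returns a list of frames, each a list of magnitudes for bins 0..win//2."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # keep only non-negative frequencies
            re = sum(s * math.cos(-2 * math.pi * k * n / win) for n, s in enumerate(seg))
            im = sum(s * math.sin(-2 * math.pi * k * n / win) for n, s in enumerate(seg))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames
```

The design intuition matches the paper's finding: a tap or stroke on a link excites characteristic frequencies in the joint torques, and exposing those frequencies explicitly makes the classification task easier than working on raw time series, regardless of the CNN architecture on top.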